CAAD 2018: Iterative Ensemble Adversarial Attack
Deep Neural Networks (DNNs) have recently led to significant improvements in
many fields. However, DNNs are vulnerable to adversarial examples: samples with
imperceptible perturbations that nonetheless dramatically mislead the DNNs.
Adversarial attacks can be used to evaluate the robustness of deep learning
models before they are deployed. Unfortunately, most existing adversarial
attacks can only fool a black-box model with a low success rate. To improve the
success rate of black-box adversarial attacks, we propose an iterative
adversarial attack against an ensemble of image classifiers. With this method,
we won 5th place in the CAAD 2018 Targeted Adversarial Attack competition.
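The core idea above, iterating a gradient step against the averaged loss of several classifiers, can be sketched on toy linear softmax models. This is a minimal illustration of the general technique, not the authors' exact method; the step size, ball radius, and model form are all assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_targeted_attack(x, models, target, eps=0.5, alpha=0.05, steps=40):
    """Iterative targeted attack: average the cross-entropy gradients of
    several linear softmax classifiers (W, b), take small signed steps
    toward `target`, and project back into the L_inf eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for W, b in models:
            p = softmax(W @ x_adv + b)
            onehot = np.zeros(W.shape[0])
            onehot[target] = 1.0
            grad += W.T @ (p - onehot)   # d(-log p_target)/dx for this model
        grad /= len(models)
        x_adv = x_adv - alpha * np.sign(grad)    # descend toward the target
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv
```

An example crafted against the ensemble average tends to transfer better to an unseen black-box model than one crafted against a single classifier, which is the motivation for ensembling here.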
CAAD 2018: Powerful None-Access Black-Box Attack Based on Adversarial Transformation Network
In this paper, we propose an improvement of Adversarial Transformation
Networks (ATN) to generate adversarial examples that can fool both white-box
and black-box models with state-of-the-art performance; the method won 2nd
place in the non-targeted task in CAAD 2018.
Exfiltration of Data from Air-gapped Networks via Unmodulated LED Status Indicators
The light-emitting diode (LED) is widely used as an indicator on information
devices. As early as 2002, Loughry et al. studied exfiltration via LED
indicators and found that LEDs left unmodulated to indicate some state of the
device can hardly be used to establish covert channels. In this paper, a novel
approach is proposed to modulate this kind of LED. We use binary
frequency-shift keying (B-FSK) in place of on-off keying (OOK) for modulation.
To verify its validity, we implement a prototype of an exfiltration malware.
Our experiments show a great improvement in the imperceptibility of the covert
communication. It is thus feasible to leak data covertly from air-gapped
networks via unmodulated LED status indicators.
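B-FSK encodes each bit as a burst of blinking at one of two frequencies, and the receiver decides the bit from the observed blink rate. A minimal sketch under assumed parameters (sample rate, bit duration, and the two frequencies are illustrative, not the paper's values):

```python
import numpy as np

# Assumed parameters (not from the paper): sample rate in Hz, seconds per
# bit, and the two blink frequencies for bit '0' and bit '1'.
RATE, BIT_TIME, F0, F1 = 1000, 0.1, 50, 100

def modulate(bits):
    """Emit an on/off LED sample stream: each bit becomes a burst of
    blinking at F0 Hz (bit 0) or F1 Hz (bit 1)."""
    t = np.arange(int(RATE * BIT_TIME)) / RATE
    chunks = [(np.sin(2 * np.pi * (F1 if b else F0) * t) > 0).astype(int)
              for b in bits]
    return np.concatenate(chunks)

def demodulate(samples):
    """Recover bits by counting on/off transitions per bit interval:
    the F1 burst toggles roughly twice as often as the F0 burst."""
    n = int(RATE * BIT_TIME)
    thresh = BIT_TIME * (F0 + F1)  # midpoint between the two edge counts
    return [1 if np.abs(np.diff(samples[i:i + n])).sum() > thresh else 0
            for i in range(0, len(samples), n)]
```

Unlike OOK, where long runs of identical bits leave the LED steadily on or off, both B-FSK symbols keep the LED blinking, which is what makes the channel harder to notice.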
Enhanced Attacks on Defensively Distilled Deep Neural Networks
Deep neural networks (DNNs) have achieved tremendous success in many machine
learning tasks, such as image classification. Unfortunately, researchers have
shown that DNNs are easily attacked by adversarial examples: slightly
perturbed images which can mislead DNNs into giving incorrect classification
results. Such attacks have seriously hampered the deployment of DNN systems in
areas with strict security or safety requirements, such as autonomous cars,
face recognition, and malware detection. Defensive distillation is a mechanism
aimed at training a robust DNN that significantly reduces the effectiveness of
adversarial example generation. The state-of-the-art attack does succeed on
distilled networks with 100% probability, but it is a white-box attack, which
needs to know the inner information of the DNN, whereas the black-box scenario
is more general. In this paper, we first propose the epsilon-neighborhood
attack, which can fool defensively distilled networks with a 100% success rate
in the white-box setting and quickly generates adversarial examples with good
visual quality. On the basis of this attack, we further propose the
region-based attack against defensively distilled DNNs in the black-box
setting. We also perform a bypass attack to indirectly break the distillation
defense as a complementary method. The experimental results show that our
black-box attacks achieve a considerable success rate on defensively distilled
networks.
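A black-box attack only sees model outputs, so it must search the neighborhood of the input by querying. The sketch below shows the simplest form of such a search, random sampling in an eps-ball; the paper's region-based attack is more structured, so this is only an illustration of the query-only setting.

```python
import numpy as np

def black_box_region_attack(x, predict, true_label, eps=0.5, queries=200, seed=0):
    """Query-only sketch: sample perturbations from the L_inf eps-ball
    around x and keep the candidate that most lowers the model's
    confidence in the true label, using nothing but output probabilities."""
    rng = np.random.default_rng(seed)
    best, best_conf = x, predict(x)[true_label]
    for _ in range(queries):
        cand = x + rng.uniform(-eps, eps, size=x.shape)
        conf = predict(cand)[true_label]
        if conf < best_conf:
            best, best_conf = cand, conf
    return best
```

Because no gradients are needed, such an attack is unaffected by the gradient masking that defensive distillation induces, which is exactly why black-box evaluation is the more general test.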
Incentivizing High-quality Content from Heterogeneous Users: On the Existence of Nash Equilibrium
We study the existence of pure Nash equilibrium (PNE) for the mechanisms used
in Internet services (e.g., online reviews and question-answer websites) to
incentivize users to generate high-quality content. Most existing work assumes
that users are homogeneous and have the same ability. However, real-world
users are heterogeneous, and their abilities can differ greatly due to their
diverse backgrounds, cultures, and professions. In this work, we consider
heterogeneous users within the following framework: (1) the users are
heterogeneous, and each of them has a private type indicating the best quality
of the content she can generate; (2) there is a fixed amount of reward to
allocate to the participating users. Under this framework, we study the
existence of pure Nash equilibrium for several mechanisms composed of
different allocation rules, action spaces, and information settings. We prove
the existence of PNE for some mechanisms and its non-existence for others. We
also discuss how to find a PNE for the mechanisms that have one, either
constructively or through a search algorithm.
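One standard search algorithm for a PNE is iterated best response: each user repeatedly switches to her best action given the others, and any fixed point is an equilibrium. The sketch below uses a hypothetical proportional-allocation mechanism (fixed reward R split in proportion to chosen quality, linear effort cost); it illustrates the search idea, not necessarily a mechanism analyzed in the paper.

```python
def utility(actions, i, R=10.0, cost=1.0):
    """Reward share proportional to contributed quality, minus linear cost.
    (Illustrative mechanism; R and the cost model are assumptions.)"""
    total = sum(actions)
    share = R * actions[i] / total if total > 0 else 0.0
    return share - cost * actions[i]

def best_response_dynamics(types, R=10.0, cost=1.0, max_rounds=100):
    """Search for a pure Nash equilibrium by iterated best response: each
    user picks the quality in {0, ..., type_i} maximizing her utility given
    the others' actions. A fixed point (no one changes) is a PNE; returns
    None if the dynamics do not converge within max_rounds."""
    actions = [0] * len(types)
    for _ in range(max_rounds):
        changed = False
        for i, t in enumerate(types):
            best = max(range(t + 1),
                       key=lambda a: utility(actions[:i] + [a] + actions[i+1:],
                                             i, R, cost))
            if best != actions[i]:
                actions[i], changed = best, True
        if not changed:
            return actions
    return None
```

Best-response dynamics need not converge in general; when the paper proves non-existence of PNE for a mechanism, this search would cycle forever, which is why the constructive proofs matter.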
Reversible Adversarial Examples
Deep Neural Networks have recently led to significant improvements in many
fields such as image classification and speech recognition. However, these
machine learning models are vulnerable to adversarial examples, which can
mislead machine learning classifiers into giving incorrect classifications. In
this paper, we take advantage of reversible data hiding to construct
reversible adversarial examples which are still misclassified by Deep Neural
Networks. Furthermore, the proposed method can recover the original images
from the reversible adversarial examples with no distortion.
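Reversible data hiding embeds a payload into an image in a way that lets both the payload and the exact original pixels be recovered. A classic construction is histogram shifting, sketched below on a toy integer image; the paper's scheme may differ, and the choice of peak bin `p` and empty bin `z` is left to the caller here.

```python
import numpy as np

def hs_embed(img, bits, p, z):
    """Histogram-shifting embed: shift bins (p, z) up by one to free bin
    p+1, then encode each bit at a peak-valued pixel (p stays p for bit 0,
    p becomes p+1 for bit 1). Requires z to be an empty histogram bin."""
    assert not np.any(img == z), "z must be an empty histogram bin"
    out = img.copy()
    out[(img > p) & (img < z)] += 1
    flat = out.reshape(-1)
    k = 0
    for i in range(flat.size):
        if k == len(bits):
            break
        if flat[i] == p:
            flat[i] += bits[k]
            k += 1
    assert k == len(bits), "not enough capacity at peak bin p"
    return out

def hs_extract(stego, n, p, z):
    """Read n bits back (p -> 0, p+1 -> 1 in scan order), then undo the
    shift to restore the original image exactly."""
    flat = stego.reshape(-1).copy()
    bits = []
    for i in range(flat.size):
        if len(bits) == n:
            break
        if flat[i] in (p, p + 1):
            bits.append(int(flat[i] - p))
            flat[i] = p
    img = flat.reshape(stego.shape)
    img[(img > p + 1) & (img <= z)] -= 1
    return bits, img
```

For a reversible adversarial example, the embedded payload would carry whatever is needed to undo the adversarial perturbation, so an authorized party can recover the clean image while the stego image still fools the classifier.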
Large-scale Online Feature Selection for Ultra-high Dimensional Sparse Data
Feature selection with large-scale, high-dimensional data is important yet
very challenging in machine learning and data mining. Online feature selection
is a promising new paradigm that is more efficient and scalable than batch
feature selection methods, but existing online approaches usually fall short
of batch approaches in efficacy. In this paper, we present a novel
second-order online feature selection scheme that is simple yet effective,
very fast, and extremely scalable for large-scale, ultra-high-dimensional
sparse data streams. The basic idea is to improve existing first-order online
feature selection methods by exploiting second-order information to choose the
subset of important features with high confidence weights. However, unlike
many second-order learning methods that often suffer from extra computational
cost, we devise a smart algorithm for second-order online feature selection
using a MaxHeap-based approach, which is not only more effective than existing
first-order approaches, but also significantly more efficient and scalable for
large-scale feature selection with ultra-high-dimensional sparse data, as
validated by our extensive experiments. Impressively, on a billion-scale
synthetic dataset (1 billion dimensions, 1 billion nonzero features, and
1 million samples), our new algorithm took only 8 minutes on a single PC,
which is orders of magnitude faster than traditional batch approaches.
\url{http://arxiv.org/abs/1409.7794}
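The truncation step at the heart of such schemes, keeping only the features whose confidence-weighted magnitude is largest, can be sketched with Python's heap utilities standing in for the MaxHeap. This illustrates the idea only; the paper's update rule and data structures are its own, and `sigma` here is an assumed per-feature confidence term.

```python
import heapq
import numpy as np

def truncate_top_k(w, sigma, k):
    """Keep the k features with the largest confidence-weighted magnitude
    |w_j| / sigma_j (sigma ~ per-feature uncertainty, as in second-order
    learners); zero out the rest. heapq.nlargest touches only the nonzero
    entries, which is what keeps this cheap on ultra-sparse weight vectors."""
    nz = np.flatnonzero(w)
    scores = np.abs(w[nz]) / sigma[nz]
    keep = heapq.nlargest(k, zip(scores, nz))
    mask = np.zeros_like(w)
    for _, j in keep:
        mask[j] = 1.0
    return w * mask
```

Selecting by |w_j| / sigma_j rather than |w_j| alone is the second-order refinement: a large weight that the learner is still uncertain about is ranked below a moderate weight held with high confidence.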
A Large Scale Urban Surveillance Video Dataset for Multiple-Object Tracking and Behavior Analysis
Multiple-object tracking and behavior analysis are essential parts of
surveillance video analysis for public security and urban management. With
billions of surveillance videos captured all over the world, multiple-object
tracking and behavior analysis by manual labor are cumbersome and expensive.
Thanks to the rapid development of deep learning algorithms in recent years,
automatic object tracking and behavior analysis create an urgent demand for a
large-scale, well-annotated surveillance video dataset that reflects the
diverse, congested, and complicated scenarios of real applications. This paper
introduces an urban surveillance video dataset (USVD) which is by far the
largest and most comprehensive. The dataset consists of 16 scenes captured in
7 typical outdoor scenarios: street, crossroads, hospital entrance, school
gate, park, pedestrian mall, and public square. Over 200k video frames are
carefully annotated, resulting in more than 3.7 million object bounding boxes
and about 7.1 thousand trajectories. We further use this dataset to evaluate
the performance of typical algorithms for multiple-object tracking and
anomalous behavior analysis and explore the robustness of these methods in
congested urban scenarios.
Comment: this dataset is not available due to the data license.
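Evaluating trackers against annotated bounding boxes rests on IoU-based matching between ground truth and predictions; unmatched ground truths count as misses and unmatched predictions as false positives, the ingredients of metrics like MOTA. A minimal sketch (greedy matching; benchmark toolkits typically use optimal assignment):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match(gts, preds, thr=0.5):
    """Greedily pair each ground-truth box with the highest-IoU unmatched
    prediction above thr; returns (pairs, misses, false_positives)."""
    pairs, used = [], set()
    for g in gts:
        best, best_iou = None, thr
        for j, p in enumerate(preds):
            if j not in used and iou(g, p) >= best_iou:
                best, best_iou = j, iou(g, p)
        if best is not None:
            used.add(best)
            pairs.append((g, best))
    return pairs, len(gts) - len(pairs), len(preds) - len(pairs)
```

Run per frame over a sequence, these counts (plus identity switches) aggregate into the standard multiple-object tracking scores.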
Coherent Online Video Style Transfer
Training a feed-forward network for fast neural style transfer of images has
proven successful. However, the naive extension of processing a video frame by
frame is prone to producing flickering results. We propose the first
end-to-end network for online video style transfer, which generates temporally
coherent stylized video sequences in near real time. Two key ideas are an
efficient network that incorporates short-term coherence, and the propagation
of short-term coherence to the long term, which ensures consistency over
larger periods of time. Our network can incorporate different image
stylization networks. We show that the proposed method clearly outperforms the
per-frame baseline both qualitatively and quantitatively. Moreover, it
achieves visually comparable coherence to optimization-based video style
transfer, while being three orders of magnitude faster at runtime.
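Short-term coherence is usually enforced by penalizing the stylized current frame for deviating from the previous stylized frame warped by optical flow, restricted to pixels where the flow is valid. A sketch of that standard temporal loss, under assumed inputs of shape (H, W, C) and a per-pixel validity mask; the paper's exact formulation may differ:

```python
import numpy as np

def temporal_coherence_loss(stylized_t, stylized_prev_warped, mask):
    """Mean squared difference between the current stylized frame and the
    flow-warped previous stylized frame, counted only where mask == 1
    (i.e. pixels that are not occluded or out of frame)."""
    diff = (stylized_t - stylized_prev_warped) ** 2
    denom = mask.sum() * stylized_t.shape[-1] + 1e-8  # valid pixels x channels
    return float((diff * mask[..., None]).sum() / denom)
```

Propagating coherence to the long term then amounts to chaining this constraint across frames, so frame t is tied to frame t-1, which is already tied to t-2, and so on.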
DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense
Neural networks are vulnerable to adversarial examples, which poses a threat
to their application in security-sensitive systems. We propose a Denoiser and
UPsampler Network (DUP-Net) structure as a defense for 3D adversarial point
cloud classification, where the two modules reconstruct surface smoothness by
dropping or adding points. In this paper, statistical outlier removal (SOR)
and a data-driven upsampling network serve as the denoiser and upsampler,
respectively. Compared with baseline defenses, DUP-Net has three advantages.
First, with DUP-Net as a defense, the target model is more robust to
white-box adversarial attacks. Second, statistical outlier removal provides
added robustness since it is a non-differentiable denoising operation. Third,
the upsampler network can be trained on a small dataset and defends well
against adversarial attacks generated from other point cloud datasets. We
conduct various experiments to validate that DUP-Net is very effective as a
defense in practice. Our best defense eliminates 83.8% of the C&W and l_2
loss based attack (point shifting), 50.0% of the C&W and Hausdorff distance
loss based attack (point adding), and 9.0% of the saliency map based attack
(point dropping) with 200 dropped points on PointNet.
Comment: Published in IEEE ICCV201
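The SOR denoiser described above is straightforward to sketch: score each point by its mean distance to its k nearest neighbors and drop the points whose score is more than alpha standard deviations above the global mean. The brute-force pairwise distances below are for illustration (real pipelines use a k-d tree), and k and alpha are assumed hyperparameters.

```python
import numpy as np

def statistical_outlier_removal(points, k=3, alpha=1.0):
    """SOR denoising: compute each point's mean distance to its k nearest
    neighbors, then keep only the points whose mean distance is within
    alpha standard deviations of the global mean."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]   # skip the zero self-distance
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + alpha * mean_d.std()
    return points[mean_d <= thresh]
```

Because the keep/drop decision is a hard threshold, this operation has no useful gradient, which is the source of the added robustness against white-box attackers noted in the abstract.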